Search Results: "dato"

21 September 2020

Russ Allbery: Review: Lower Ed

Review: Lower Ed, by Tressie McMillan Cottom
Publisher: The New Press
Copyright: 2017
Printing: 2018
ISBN: 1-62097-472-X
Format: Kindle
Pages: 217
Lower Ed (subtitled The Troubling Rise of For-Profit Colleges in the New Economy) is the first book by sociologist Tressie McMillan Cottom. (I previously reviewed her second book, the excellent essay collection Thick.) It is a deep look at the sociology of for-profit higher education in the United States based on interviews with students and executives, analysis of Wall Street filings, tests of the admissions process, and her own personal experiences working for two of the schools. One of the questions that McMillan Cottom tries to answer is why students choose to enroll in these institutions, particularly the newer type of institution funded by federal student loans and notorious for being more expensive and less valuable than non-profit colleges and universities. I was hesitant to read this book because I find for-profit schools depressing. I grew up with the ubiquitous commercials, watched the backlash develop, and have a strongly negative impression of the industry, partly influenced by having worked in traditional non-profit higher education for two decades. The prevailing opinion in my social group is that they're a con job. I was half-expecting a reinforcement of that opinion by example, and I don't like reading infuriating stories about people being defrauded. I need not have worried. This is not that sort of book (nor, in retrospect, do I think McMillan Cottom would approach a topic from that angle). Sociology is broader than reporting. Lower Ed positions for-profit colleges within a larger social structure of education, credentialing, and changes in workplace expectations; takes a deep look at why they are attractive to their students; and humanizes and complicates the motives and incentives of everyone involved, including administrators and employees of for-profit colleges as well as the students. McMillan Cottom does of course talk about the profit motive and the deceptions surrounding that, but the context is less that of fraud that people are unable to see through and more a balancing of the drawbacks of a set of poor choices embedded in institutional failures. One of my metrics for a good non-fiction book is whether it introduces me to a new idea that changes how I analyze the world. Lower Ed does that twice. The first idea is the view of higher education through the lens of risk shifting. It used to be common for employers to hire people without prior job-specific training and do the training in-house, possibly through an apprenticeship structure. More notably, once one was employed by a particular company, the company routinely arranged or provided ongoing training. This went hand-in-hand with a workplace culture of long tenure, internal promotion, attempts to avoid layoffs, and some degree of mutual loyalty. Companies expected to invest significantly in an employee over their career and thus also had an incentive to retain that employee rather than train someone for a competitor. However, from a purely financial perspective, this is a risk and an inefficiency, similar to the risk of carrying a large inventory of parts and components. Companies have responded to investor-driven focus on profits and efficiency by reducing overhead and shifting risk. This leads to the lean supply chain, where no one pays for parts to sit around in warehouses and companies aren't caught with large stockpiles of now-useless components, but which is more sensitive to any disruption (such as from a global pandemic). 
And, for employment, it leads to a desire to hire pre-trained workers, retain only enough workers to do the current amount of work, and replace them with new workers who already have appropriate training rather than retrain them. The effect of the corporate decision to only hire pre-trained employees is to shift the risk and expense of training from the company to the prospective employee. The individual has to seek out training at their own expense in the hope (not guarantee) that at the conclusion of that training they will get or retain a job. People therefore turn to higher education to both provide that training and to help them decide what type of training will eventually be valuable. This has a long history with certain professional fields (doctors and lawyers, for example), but the requirements for completing training in those fields are relatively clear (a professional license to practice) and the compensation reflects the risk. What's new is the shift of training risk to the individual in more mundane jobs, without any corresponding increase in compensation. This, McMillan Cottom explains, is the background for the growth in demand for higher education in general and the type of education offered by for-profit colleges in particular. Workers who in previous eras would be trained by their employers are now responsible for their own training. That training is no longer judged by the standards of a specific workplace, but is instead evaluated by a hiring process that expects constant job-shifting. This leads to increased demand by both workers and employers for credentials: some simple-to-check certificate of completion of training that says that this person has the skills to immediately start doing some job. It also leads to a demand for more flexible class hours, since the student is now often someone older with a job and a family to balance. Their ongoing training used to be considered a cost of business and happen during their work hours; now it is something they have to fit around the contours of their life because their employer has shifted that risk to them.

The risk-shifting frame makes sense of the "investment" language so common in for-profit education. In this job economy, education as investment is not a weird metaphor for the classic benefits of a liberal arts education: broadened perspective, deeper grounding in philosophy and ethics, or heightened aesthetic appreciation. It's an investment in the literal financial sense; it is money that you spend now in order to get a financial benefit (a job) in the future. People have to invest in their own training because employers are no longer doing so, but still require the outcome of that investment. And, worse, it's primarily a station-keeping investment. Rather than an optional expenditure that could reap greater benefits later, it's a mandatory expenditure to prevent, at best, stagnation in a job paying poverty wages, and at worst the disaster of unemployment.

This explains renewed demand for higher education, but why for-profit colleges? We know they cost more and have a worse reputation (and therefore their credentials have less value) than traditional non-profit colleges. Flexible hours and class scheduling explains some of this but not all of it. That leads to the second perspective-shifting idea I got from Lower Ed: for-profit colleges are very good at what they focus time and resources on, and they focus on enrolling students. It is hard to enroll in a university!
More precisely, enrolling in a university requires bureaucracy navigation skills, and those skills are class-coded. The people who need them the most are the least likely to have them. Universities do not reach out to you, nor do they guide you through the process. You have to go to them and discover how to apply, something that is often made harder by the confusing state of many university web sites. The language and process are opaque unless other people in your family have experience with universities and can explain it. There might be someone you can reach on the phone to ask questions, but they're highly unlikely to proactively guide you through the remaining steps. It's your responsibility to understand deadlines, timing, and sequence of operations, and if you miss any of the steps (due to, for example, the overscheduled life of someone in need of better education for better job prospects), the penalty in time and sometimes money can be substantial. And admission is just the start; navigating financial aid, which most students will need, is an order of magnitude more daunting. Community colleges are somewhat easier (and certainly cheaper) than universities, but still have similar obstacles (and often even worse web sites). It's easy for people like me, who have long professional expertise with bureaucracies, family experience with higher education, and a support network of people to nag me about deadlines, to underestimate this.

But the application experience at a for-profit college is entirely different in ways far more profound than I had realized. McMillan Cottom documents this in detail from her own experience working for two different for-profit colleges and from an experiment where she indicated interest in multiple for-profit colleges and then stopped responding before signing admission paperwork. A for-profit college is fully invested in helping a student both apply and get financial aid, devotes someone to helping them through that process, does not expect them to understand how to navigate bureaucracies or decipher forms on their own, does not punish unexpected delays or missed appointments, and goes to considerable lengths to try to keep anyone from falling out of the process before they are enrolled. They do not expect their students to already have the skills that one learns from working in white-collar jobs or from being surrounded by people who do. They provide the kind of support that an educational institution should provide to people who, by definition, don't understand something and need to learn.

Reading about this was infuriating. Obviously, this effort to help people enroll is largely for predatory reasons. For-profit schools make their money off federal loans and they don't get that money unless they can get someone to enroll and fill out financial paperwork (and to some extent keep them enrolled), so admissions is their cash cow and they act accordingly. But that's not why I found it infuriating; that's just predictable capitalism. What I think is inexcusable is that nothing they do is that difficult. We could be doing the same thing for prospective community college students but have made the societal choice not to. We believe that education is valuable, we constantly advocate that people get more job training and higher education, and yet we demand prospective students navigate an unnecessarily baroque and confusing application process with very little help, and then stereotype and blame them for failing to do so.
This admission support is not a question of resources. For-profit colleges are funded almost entirely by federally-guaranteed student loans. We are paying them to help people apply. It is, in McMillan Cottom's term, a negative social insurance program. Rather than buffering people against the negative effects of risk-shifting of employers by helping them into the least-expensive and most-effective training programs (non-profit community colleges and universities), we are spending tax dollars to enrich the shareholders of for-profit colleges while underfunding the alternatives. We are choosing to create a gap that routes government support to the institution that provides worse training at higher cost but is very good at helping people apply. It's as if the unemployment system required one to use payday lenders to get one's unemployment check. There is more in this book I want to talk about, but this review is already long enough. Suffice it to say that McMillan Cottom's analysis does not stop with market forces and the admission process, and the parts of her analysis that touch on my own personal experience as someone with a somewhat unusual college path ring very true. Speaking as a former community college student, the discussion of class credit transfer policies and the way that institutional prestige gatekeeping and the desire to push back against low-quality instruction becomes a trap that keeps students in the for-profit system deserves another review this length. So do the implications of risk-shifting and credentialism on the morality of "cheating" on schoolwork. As one would expect from the author of the essay "Thick" about bringing context to sociology, Lower Ed is personal and grounded. McMillan Cottom doesn't shy away from including her own experiences and being explicit about her sources and research. This is backed up by one of the best methodological notes sections I've seen in a book. One of the things I love about McMillan Cottom's writing is that it's solidly academic, not in the sense of being opaque or full of jargon (the text can be a bit dense, but I rarely found it hard to follow), but in the sense of being clear about the sources of knowledge and her methods of extrapolation and analysis. She brings her receipts in a refreshingly concrete way. I do have a few caveats. First, I had trouble following a structure and line of reasoning through the whole book. Each individual point is meticulously argued and supported, but they are not always organized into a clear progression or framework. That made Lower Ed feel at times like a collection of high-quality but somewhat unrelated observations about credentials, higher education, for-profit colleges, their student populations, their business models, and their relationships with non-profit schools. Second, there are some related topics that McMillan Cottom touches on but doesn't expand sufficiently for me to be certain I understood them. One of the big ones is credentialism. This is apparently a hot topic in sociology and is obviously important to this book, but it's referenced somewhat glancingly and was not satisfyingly defined (at least for me). There are a few similar places where I almost but didn't quite follow a line of reasoning because the book structure didn't lay enough foundation. Caveats aside, though, this was meaty, thought-provoking, and eye-opening, and I'm very glad that I read it. 
This is a topic that I care more about than most people, but if you have watched for-profit colleges with distaste but without deep understanding, I highly recommend Lower Ed. Rating: 8 out of 10

19 September 2020

Vincent Bernat: Keepalived and unicast over multiple interfaces

Keepalived is a Linux implementation of VRRP. The usual role of VRRP is to share a virtual IP across a set of routers. For each VRRP instance, a leader is elected and gets to serve the IP address, ensuring the high availability of the attached service. Keepalived can also be used for a generic leader election, thanks to its ability to use scripts for healthchecking and run commands on state change. A simple configuration looks like this:
vrrp_instance gateway1 {
  state BACKUP          # ❶
  interface eth0        # ❷
  virtual_router_id 12  # ❸
  priority 101          # ❹
  virtual_ipaddress {
    2001:db8:ff/64
  }
}
The state keyword in ❶ instructs Keepalived to not take the leader role when starting. Otherwise, incoming nodes create a temporary disruption by taking over the IP address until the election settles. The interface keyword in ❷ defines the interface for sending and receiving VRRP packets. It is also the default interface to configure the virtual IP address. The virtual_router_id directive in ❸ is common to all nodes sharing the virtual IP. The priority keyword in ❹ helps choose which router will be elected as leader. If you need more information about Keepalived, be sure to check the documentation. VRRP design is tied to Ethernet networks and requires a multicast-enabled network for communication between nodes. In some environments, notably public clouds, multicast is unavailable. In this case, Keepalived can send VRRP packets using unicast:
vrrp_instance gateway1 {
  state BACKUP
  interface eth0
  virtual_router_id 12
  priority 101
  unicast_peer {
    2001:db8::11
    2001:db8::12
  }
  virtual_ipaddress {
    2001:db8:ff/64 dev lo
  }
}
Another process, like a BGP daemon, should advertise the virtual IP address to the network. If needed, Keepalived can trigger whatever action is needed for this by using notify_* scripts. Until version 2.21 (not released yet), the interface directive is mandatory and Keepalived will transmit and receive VRRP packets on this interface only. If peers are reachable through several interfaces, like on a BGP on the host setup, you need a workaround. A simple one is to use a VXLAN interface:
$ ip -6 link add keepalived6 type vxlan id 6 dstport 4789 local 2001:db8::10 nolearning
$ bridge fdb append 00:00:00:00:00:00 dev keepalived6 dst 2001:db8::11
$ bridge fdb append 00:00:00:00:00:00 dev keepalived6 dst 2001:db8::12
$ ip link set up dev keepalived6
Learning of MAC addresses is disabled and one generic entry for each peer is added to the forwarding database: transmitted packets are broadcast to all peers, notably VRRP packets. Have a look at VXLAN & Linux for additional details.
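To verify that the two static entries are in place, you can dump the forwarding database for the interface:
$ bridge fdb show dev keepalived6
With the VXLAN interface up, the VRRP instance is then bound to it: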
vrrp_instance gateway1 {
  state BACKUP
  interface keepalived6
  mcast_src_ip 2001:db8::10
  virtual_router_id 12
  priority 101
  virtual_ipaddress {
    2001:db8:ff/64 dev lo
  }
}
Starting from Keepalived 2.21, unicast_peer can be used without the interface directive. I think using VXLAN is still a neat trick applicable to other situations where communication using broadcast or multicast is needed while the underlying network provides no support for this.
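For reference, a sketch of what the simpler configuration could look like once 2.21 is out, with no interface directive and the same peers as above (untested, since 2.21 is not released yet; the notify_master script path is a made-up example of one way to trigger the BGP advertisement mentioned earlier):
vrrp_instance gateway1 {
  state BACKUP
  virtual_router_id 12
  priority 101
  unicast_peer {
    2001:db8::11
    2001:db8::12
  }
  virtual_ipaddress {
    2001:db8:ff/64 dev lo
  }
  # executed when this node becomes the leader
  notify_master "/usr/local/bin/advertise-vip"
}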

13 August 2020

Erich Schubert: Publisher MDPI lies to prospective authors

The publisher MDPI is a spammer and lies. If you upload a paper draft to arXiv, MDPI will send spam to the authors to solicit submission. Within minutes of an upload I received the following email (sent by MDPI staff, not some overly eager new editor):
We read your recent manuscript "[...]" on
arXiv, and sincerely invite you to submit it to our journal Future
Internet, if it has not been published or submitted elsewhere.
Future Internet (ISSN 1999-5903, indexed by Scopus, Ei compendex,
*ESCI*-Web of Science) is a journal on Internet technologies and the
information society. It maintains a rigorous and fast peer review system
with a median publication time of 35 days from submission to online
publication, and 3 days from acceptance to publication. The journal
scope is shown here:
https://www.mdpi.com/journal/futureinternet/about.
Editorial Board: https://www.mdpi.com/journal/futureinternet/editors.
Since Future Internet is an open access journal there is a publication
fee. Your paper will be published, with a 20% discount (amounting to 200
CHF), and provided that it is accepted after our standard peer-review
procedure. 
First of all, the email begins with a lie, because this paper clearly states that it has been submitted elsewhere. Also, it fits other journals much better, and if they had read even just the abstract, they would have known that. This is predatory behavior by MDPI. Clearly, it is just about getting as many submissions as possible. The journal charges 1000 CHF (next year, 1400 CHF) to publish the papers. It's about the money. Also, there have been reports that MDPI ignores the reviews, and always publishes even when reviewers recommended rejection. The reviewer requests I have received from MDPI came with unreasonable deadlines, which will not allow for a thorough peer review. Hence I asked never to be emailed by them again. I must assume that many other qualified reviewers do the same. MDPI boasts in their 2019 annual report a median time to first decision of 19 days. In my discipline the typical time window to ask for reviews is at least a month (for shorter conference papers, not full journal articles), because professors tend to have lots of other duties and hence need more flexibility. The paper above was submitted in March and has now been under review for four months already. This is an annoyingly long time window, and I would appreciate it if this were shorter, but it shows how extremely short the MDPI time frame is. They also claim 269.1k submissions and 106.2k published papers, so the acceptance rate is around 40% on average, and assuming that some of their journals have higher standards, others must have acceptance rates much higher than this. I'd assume that many reputable journals have a 40% desk-rejection rate just for papers that are not even on-topic. The average cost to authors is given as 1144 CHF (after discounts, 25% waived fees, etc.), so we are talking about roughly 120 million CHF of revenue from authors. Is that what you want academic publishing to be? I am not happy with some of the established publishers such as Elsevier that also overcharge universities heavily. I do think we need to change academic publishing, and arXiv is a big improvement here. But I do not respect publishers such as MDPI that lie and send spam.

9 June 2020

Ingo Juergensmann: Jabber vs. XMPP

XMPP is widely - and maybe better - known as Jabber. The two were more or less the same until Cisco bought Jabber Inc and the trademark. You can read more about the story on the XMPP.org website. But is there still a Jabber around? Yes, there is! But Cisco Jabber is a whole infrastructure environment: you can't use the Cisco Jabber client on its own without the other required Cisco infrastructure such as Cisco CUCM and Cisco IM&P servers. So you can't just set up Prosody or ejabberd on your Debian server and connect Cisco Jabber to it. But what are the differences between Cisco Jabber and "standard" XMPP clients?
Cisco Jabber
The screenshot on the official Cisco Jabber product webpage shows the new, single-view layout of the Cisco Webex Teams client, but you can configure the client to have the old, classic split-view layout of Contact List and Chat Window. As you can already see from that screenshot, audio & video calls are one of the core functions of Cisco Jabber, whereas this feature has only lately been added to the well-known Conversations XMPP client on Android. Conversations uses the Jingle extension to XMPP, whereas Jabber uses SIP for voice/video calls. You can even use Cisco Jabber to control your deskphone via CTI, which is a quite common setup for Jabber. In fact you can configure Jabber to be just a CTI client to your phone or a fully featured UC client. When you don't want to have Cisco's full set of on-premise servers, you can also use Cisco Jabber in conjunction with Cisco Webex as Cisco Webex Messenger, or in conjunction with Webex Teams in Teams Messaging Mode. Last month Cisco announced general availability of XMPP federation for Webex Teams/Jabber in Teams Messaging Mode. With that you have basic functionality in Webex Teams. And when I say "basic" I really mean basic: you can have 1:1 chat only, no group chats (MUC) and no presence status will be possible. Hopefully this is just the beginning and not the end of XMPP support in Webex Teams.
XMPP Clients
Well, I'm sure many of you know "normal" XMPP clients such as Gajim or Dino on Linux, Conversations on Android or Siskin/Monal/ChatSecure on Apple iOS. There are plenty of other clients of course, and maybe you used an XMPP client in the past without knowing it. For example Jitsi Meet is based on XMPP, and you can still download the Jitsi Desktop client and use it as a full-featured multi-protocol client, e.g. for XMPP and SIP. In fact Jitsi Desktop is maybe the client that comes closest to Cisco Jabber as a chat/voice/video client. I have already connected Jitsi Desktop to Cisco CUCM/IM&P infrastructure; of course you won't be able to use all those Cisco proprietary extensions, but you can see the benefit of open, standardized protocols such as XMPP and SIP: you are free to use any standards-compliant client that you want. So, while Jitsi supported voice/video calls for a long time, even before it focused on Jitsi Meet as a WebRTC-based conference service, Conversations added this feature last month, as already stated. This had a huge effect on the whole XMPP federation, because you need an XMPP server that supports XEP-0215 to make these audio/video calls work. The well-known Compliance Tester first listed the STUN/TURN features as "Informational Tests", but quickly made them mandatory to pass and score 100% on the Compliance Tester. But you cannot place SIP calls to other sites, because that's a different thing.
As many of you are familiar with standard XMPP clients, I'll now focus on some similarities and differences between Cisco Jabber and standard XMPP.
Similarities & Differences
First, you can federate with Cisco Jabber users. Cisco IM&P can use standard XMPP federation with all other standards-compliant XMPP servers. This is really a big benefit and way better than other solutions that usually result in vendor lock-in. Depending on the setup, you can even join from your own XMPP client in MUCs (Multi User Chats), which Cisco calls "Persistent Chat Rooms". The other way around is not that simple: basically it is possible to join with Cisco Jabber a MUC on a random server, but it is not as easy as you might think. Cisco Jabber simply lacks a way to enter a room JID (as you can find them on https://search.jabber.network/). Instead you need to be added as a participant by a moderator or an admin in that 3rd-party MUC. Managed File Transfer is another issue. Cisco Jabber supports peer-to-peer file transfers and Managed File Transfer, where the uploaded file gets transferred to an SFTP server as storage backend and where the IM&P server handles the transfer via HTTPS. You can find a schematic drawing in the Configuration Guides. Although it appears similar to HTTP Upload as defined in XEP-0363, it is not very likely that the two will work together. I haven't tested it yet, because in my test scenario there is a gatekeeper in the path: Cisco Expressway doesn't support Managed File Transfer (yet), but you can upvote the idea in Cisco's ideas management, along with other ideas such as OMEMO support. OMEMO support? Yes, there is no end-to-end encryption (E2EE) currently planned for Cisco Jabber, while it is common nowadays in most modern XMPP clients. I think it would be good for Cisco Jabber to also (optionally) support OMEMO or its successor. Messaging clients without E2EE are not state of the art anymore. Whereas Conversations is the de-facto standard on Android, Apple iOS devices still lack a similarly well-working client. See my blog post "XMPP - Fun with Clients" for a summary. In that regard Cisco Jabber might be the best XMPP client for iOS to some degree: you have working messaging, voice/video calls, push notifications and integration into Apple's CallKit. There are most likely many, many more differences and issues between Cisco Jabber and standards-compliant XMPP servers and clients. But basically Cisco Jabber is still based on XMPP and extends it with proprietary extensions.
Summary
While I have the impression that the free clients and servers are doing well and have seen increased development in the past years (thanks to Conversations and the Compliance Tester), the situation of Cisco Jabber is a little different. As a customer you can sometimes get the impression that Cisco has lost interest in developing Cisco Jabber. It got better in the last years, but when Cisco Spark was introduced some years ago, the impression was that Cisco was heavily focused on Spark (now: Webex Teams). It's not that Cisco is not listening to customers or that development on Jabber has stopped, but my impression is that most customers don't give feedback or tell Cisco as the vendor what they want. You can either submit ideas via the Collaboration Customer Ideas Tool or provide feedback via your Cisco and partner channels. I think it is important for the XMPP community to also have a large enterprise-level vendor like Cisco.
Otherwise the Internet will become more and more an Internet of closed silos like MS Teams, Slack, Facebook, etc. Of course there are other companies like ProcessOne (ejabberd) or Tigase, but I think you'll agree that Cisco is on another level.

10 May 2020

Russ Allbery: Review: Golden Gates

Review: Golden Gates, by Conor Dougherty
Publisher: Penguin
Copyright: 2020
ISBN: 0-525-56022-X
Format: Kindle
Pages: 249
This review, for reasons that will hopefully become clear later, starts with a personal digression. I have been interested in political theory my entire life. That sounds like something admirable, or at least neutral. It's not. "Interested" means that I have opinions that are generally stronger than my depth of knowledge warrants. "Interested" means that I like thinking about and casting judgment on how politics should be done without doing the work of politics myself. And "political theory" is different than politics in important ways, not the least of which is that political actions have rarely been a direct danger to me or my family. I have the luxury of arguing about politics as a theory. In short, I'm at high risk of being one of those people who has an opinion about everything and shares it on Twitter. I'm still in the process (to be honest, near the beginning of the process) of making something useful out of that interest. I've had some success when I become enough a part of a community that I can do some of the political work, understand the arguments at a level deeper than theory, and have to deal with the consequences of my own opinions. But those communities have been on-line and relatively low stakes. For the big political problems, the ones that involve governments and taxes and laws, those that decide who gets medical treatment and income support and who doesn't, to ever improve, more people like me need to learn enough about the practical details that we can do the real work of fixing them, rather than only making our native (and generally privileged) communities better for ourselves. I haven't found my path helping with that work yet. But I do have a concrete, challenging, local political question that makes me coldly furious: housing policy. Hence this book. Golden Gates is about housing policy in the notoriously underbuilt and therefore incredibly expensive San Francisco Bay Area, where I live. I wanted to deepen that emotional reaction to the failures of housing policy with facts and analysis. Golden Gates does provide some of that. But this also turns out to be a book about the translation of political theory into practice, about the messiness and conflict that results, and about the difficult process of measuring success. It's also a book about how substantial agreement on the basics of necessary political change can still founder on the shoals of prioritization, tribalism, and people who are interested in political theory. In short, it's a book about the difficulty of changing the world instead of arguing about how to change it. This is not a direct analysis of housing policy, although Dougherty provides the basics as background. Rather, it's the story of the political fight over housing told primarily through two lenses: Sonja Trauss, founder of BARF (the Bay Area Renters' Federation); and a Redwood City apartment complex, the people who fought its rent increases, and the nun who eventually purchased it. Around that framework, Dougherty writes about the Howard Jarvis Taxpayers Association and the history of California's Proposition 13, a fight over a development in Lafayette, the logistics challenge of constructing sufficient housing even when approved, and the political career of Scott Wiener, the hated opponent of every city fighting for the continued ability to arbitrarily veto any new housing. 
One of the things Golden Gates helped clarify for me is that there are three core interest groups that have to be part of any discussion of Bay Area housing: homeowners who want to limit or eliminate local change, renters who are vulnerable to gentrification and redevelopment, and the people who want to live in that area and can't (which includes people who want to move there, but more sympathetically includes all the people who work there but can't afford to live locally, such as teachers, day care workers, food service workers, and, well, just about anyone who doesn't work in tech). (As with any political classification, statements about collectives may not apply to individuals; there are numerous people who appear to fall into one group but who vote in alignment with another.) Dougherty makes it clear that housing policy is intractable in part because the policies that most clearly help one of those three groups hurt the other two. As advertised by the subtitle, Dougherty's focus is on the fight for more housing. Those who already own homes whose values have been inflated by artificial scarcity, or who want to preserve such stratified living conditions as low-density, large-lot single-family dwellings within short mass-transit commute of one of the densest cities in the United States, don't get a lot of sympathy or focus here except as opponents. I understand this choice; I also don't have much sympathy. But I do wish that Dougherty had spent more time discussing the unsustainable promise that California has implicitly made to homeowners: housing may be impossibly expensive, but if you can manage to reach that pinnacle of financial success, the ongoing value of your home is guaranteed. He does mention this in passing, but I don't think he puts enough emphasis on the impact that a single huge, illiquid investment that is heavily encouraged by government policy has on people's attitude towards anything that jeopardizes that investment. The bulk of this book focuses on the two factions trying to make housing cheaper: Sonja Trauss and others who are pushing for construction of more housing, and tenant groups trying to manage the price of existing housing for those who have to rent. The tragedy of Bay Area housing is that even the faintest connection of housing to the economic principle of supply and demand implies that the long-term goals of those two groups align. Building more housing will decrease the cost of housing, at least if you build enough of it over a long enough period of time. But in the short term, particularly given the amount of Bay Area land pre-emptively excluded from housing by environmental protection and the actions of the existing homeowners, building more housing usually means tearing down cheap lower-density housing and replacing it with expensive higher-density housing. And that destroys people's lives. I'll admit my natural sympathy is with Trauss on pure economic grounds. There simply aren't enough places to live in the Bay Area, and the number of people in the area will not decrease. To the marginal extent that growth even slows, that's another tale of misery involving "super commutes" of over 90 minutes each way. But the most affecting part of this book was the detailed look at what redevelopment looks like for the people who thought they had housing, and how it disrupts and destroys existing communities. It's impossible to read those stories and not be moved. 
But it's equally impossible to not be moved by the stories of people who live in their cars during the week, going home only on weekends because they have to live too far away from their jobs to commute. This is exactly the kind of politics that I lose when I take a superficial interest in political theory. Even when I feel confident in a guiding principle, the hard part of real-world politics is bringing real people with you in the implementation and mitigating the damage that any choice of implementation will cause. There are a lot of details, and those details matter. Without the right balance between addressing a long-term deficit and providing short-term protection and relief, an attempt to alleviate unsustainable long-term misery creates more short-term misery for those least able to afford it. And while I personally may have less sympathy for the relatively well-off who have clawed their way into their own mortgage, being cavalier with their goals and their financial needs is both poor ethics and poor politics. Mobilizing political opponents who have resources and vote locally isn't a winning strategy.

Dougherty is a reporter, not a housing or public policy expert, so Golden Gates poses problems and tells stories rather than describing solutions. This book didn't lead me to a brilliant plan for fixing the Bay Area housing crunch, or hand me a roadmap for how to get effectively involved in local politics. What it did do is tell stories about what political approaches have worked, how they've worked, what change they've created, and the limitations of that change. Solving political problems is work. That work requires understanding people and balancing concerns, which in turn requires a lot of empathy, a lot of communication, and sometimes finding a way to make unlikely allies. I'm not sure how broad the appeal of this book will be outside of those who live in the region. Some aspects of the fight for housing generalize, but the Bay Area (and I suspect every region) has properties specific to it or to the state of California. It has also reached an extreme of housing shortage that is rivaled in the United States only by New York City, which changes the nature of the solutions. But if you want to seriously engage with Bay Area housing policy, knowing the background explained here is nearly mandatory. There are some flaws (I wish Dougherty had talked more about traffic and transit policy, although I realize that could be another book), but this is an important story told well. If this somewhat narrow topic is within your interests, highly recommended. Rating: 8 out of 10

19 April 2020

Enrico Zini: Little wonders

Gibbsdavidl/CatterPlots
Did you ever wish you could make scatter plots with cat shaped points? Now you can! - Gibbsdavidl/CatterPlots
What is the best tool to use for drawing vector pictures? For me and probably for many others, the answer is pretty obvious: Illustrator, or, maybe, Inkscape.
A coloring book to help folks understand how SELinux works. - mairin/selinux-coloring-book
The EURion constellation (also known as Omron rings[1] or doughnuts[2]) is a pattern of symbols incorporated into a number of banknote designs worldwide since about 1996. It is added to help imaging software detect the presence of a banknote in a digital image. Such software can then block the user from reproducing banknotes to prevent counterfeiting using colour photocopiers. According to research from 2004, the EURion constellation is used for colour photocopiers but probably not used in computer software.[3] It has been reported that Adobe Photoshop will not allow editing of an image of a banknote, but in some versions this is believed to be due to a different, unknown digital watermark rather than the EURion constellation.[4][3]
This huge collection of non-scary optical illusions and fascinating visual phenomena emphasizes interactive exploration, beauty, and scientific explanation.
Generated photos are created from scratch by AI systems. All images can be used for any purpose without worrying about copyrights, distribution rights, infringement claims, or royalties.
A 1984 documentary film about the shunters at the Dresden-Friedrichstadt railway station in the GDR.
The Sardinian term femina accabadora, femina agabbadòra or, more commonly, agabbadora or accabadora (s'agabbadòra, lit. "she who finishes", derives from the Sardinian s'acabbu, "the end", or from the Spanish acabar, "to finish") denotes the historically uncertain figure of a woman who took it upon herself to bring death to people of any age, in cases where they were in such a state of illness that the family, or the victim themselves, asked for it. In reality there is no proof of this practice, which would have concerned some Sardinian regions such as Marghine, Planargia and Gallura[1]. The practice was not supposed to be paid for by the relatives of the sick person, since paying to bring death was contrary to religious precepts and to superstition.
Alright the people have spoken and they want more cat genetics. So, I present to you all "Cat Coat Genetics 101: A Tweetorial", feat. pics of many real life cats (for science, of course...this baby is Caterpillar).

15 April 2020

Antoine Beaupré: OpenDKIM configuration to send debian.org email

Debian.org added support for DKIM in 2020. To configure this on my side, I had to do the following, on top of my email configuration.
  1. add this line to /etc/opendkim/signing.table:
    *@debian.org marcos-debian.anarcat.user
    
  2. add this line to /etc/opendkim/key.table:
    marcos-debian.anarcat.user debian.org:marcos-debian.anarcat.user:/etc/opendkim/keys/marcos-debian.anarcat.user.private
    
Yes, that's quite a mouthful! That magic selector is that long because it needs a special syntax (specifically the .anarcat.user suffix) for Debian to be happy. The -debian string is to tell me where the key is published. The marcos prefix is to remind me where the private key is used.
  3. generate the key with:
    opendkim-genkey --directory=/etc/opendkim/keys/ --selector=marcos-debian.anarcat.user --domain=debian.org --verbose
    
    This creates the DNS record in /etc/opendkim/keys/marcos-debian.anarcat.user.txt (alongside the private key in .key).
  4. restart OpenDKIM:
    service opendkim restart
    
    The DNS record will look something like this:
    marcos-debian.anarcat.user._domainkey   IN  TXT ( "v=DKIM1; h=sha256; k=rsa; "
    "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtKzBK2f8vg5yV307WAOatOhypQt3ANQ95iDaewkVehmx42lZ6b4PzA1k5DkIarxjkk+7m6oSpx5H3egrUSLMirUiMGsIb5XVGBPFmKZhDVmC7F5G1SV7SRqqKZYrXTufRRSne1eEtA31xpMP0B32f6v6lkoIZwS07yQ7DDbwA9MHfyb6MkgAvDwNJ45H4cOcdlCt0AnTSVndcl"
    "pci5/2o/oKD05J9hxFTtlEblrhDXWRQR7pmthN8qg4WaNI4WszbB3Or4eBCxhUdvAt2NF9c9eYLQGf0jfRsbOcjSfeus0e2fpsKW7JMvFzX8+O5pWfSpRpdPatOt80yy0eqpm1uQIDAQAB" )  ; ----- DKIM key marcos-debian.anarcat.user for debian.org
    
  5. The "p=MIIB..." string needs to be joined together, without the quotes and the p=, and sent in a signed email to changes@db.debian.org:
    -----BEGIN PGP SIGNED MESSAGE-----
    dkimPubKey: marcos.anarcat.user MIIB[...]
    -----BEGIN PGP SIGNATURE-----
    [...]
    
  6. Wait a few minutes for DNS to propagate. You can check if they have with:
    host -t TXT marcos-debian.anarcat.user._domainkey.debian.org nsp.dnsnode.net
    
    (nsp.dnsnode.net being one of the NS records of the debian.org zone.)
If all goes well, the tests should pass when sending from your server as anarcat@debian.org.
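Before relying on external tests, the published record can also be checked directly against the private key with opendkim-testkey from opendkim-tools, using the selector and key path from the steps above:
opendkim-testkey -d debian.org -s marcos-debian.anarcat.user -k /etc/opendkim/keys/marcos-debian.anarcat.user.private -vvv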

Testing
Test messages can be sent to dkimvalidator, mail-tester.com or check-auth@verifier.port25.com. Those tools will run SpamAssassin on the received emails and report the results. What you are looking for is:
  • -0.1 DKIM_VALID: Message has at least one valid DKIM or DK signature
  • -0.1 DKIM_VALID_AU: Message has a valid DKIM or DK signature from author's domain
  • -0.1 DKIM_VALID_EF: Message has a valid DKIM or DK signature from envelope-from domain
If one of those is missing, then you are doing something wrong and your "spamminess" score will be worse. The latter is especially tricky as it validates the "Envelope From", which is the MAIL FROM: header as sent by the originating MTA, which you see as from=<> in the postfix logs. The following will happen anyway, as soon as you have a signature; that's normal:
  • 0.1 DKIM_SIGNED: Message has a DKIM or DK signature, not necessarily valid
And this might happen if you have a ADSP record but do not correctly sign the message with a domain field that matches the record:
  • 1.1 DKIM_ADSP_ALL No valid author signature, domain signs all mail
That's bad and will affect your spam score badly. I fixed that issue by using a wildcard key in the key table:
--- a/opendkim/key.table
+++ b/opendkim/key.table
@@ -1 +1 @@
-marcos anarc.at:marcos:/etc/opendkim/keys/marcos.private
+marcos %:marcos:/etc/opendkim/keys/marcos.private

References
This is a copy of a subset of my more complete email configuration.

11 April 2020

François Marier: Using Gogo WiFi on Linux

Gogo, the WiFi provider for airlines like Air Canada, is not available to Linux users even though it advertises "access using any Wi-Fi enabled laptop, tablet or smartphone". It is however possible to work around this restriction by faking your browser user agent. I tried the User-Agent Switcher for Chrome extension on Chrome and Brave but it didn't work for some reason. What did work was using Firefox and adding the following prefs in about:config to spoof its user agent to Chrome for Windows:
general.useragent.override=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36
general.useragent.updates.enabled=false
privacy.resistFingerprinting=false
The last two prefs are necessary in order for the hidden general.useragent.override pref to not be ignored.
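If you use this service regularly, one way to make these prefs persistent is to put them in a user.js file in your Firefox profile, which Firefox applies at every start (a sketch; the profile directory name below is a placeholder):
// ~/.mozilla/firefox/<profile>/user.js
user_pref("general.useragent.override", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36");
user_pref("general.useragent.updates.enabled", false);
user_pref("privacy.resistFingerprinting", false);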

Opt out of mandatory arbitration
As an aside, the Gogo terms of service automatically enroll you into mandatory arbitration unless you opt out by sending an email to customercare@gogoair.com within 30 days of using their service. You may want to create an email template for this so that you can fire off a quick email to them as soon as you connect. I will probably write a script for it next time I use this service.

6 April 2020

Russ Allbery: Review: Thick

Review: Thick, by Tressie McMillan Cottom
Publisher: The New Press
Copyright: 2019
ISBN: 1-62097-437-1
Format: Kindle
Pages: 247
Tressie McMillan Cottom is an associate professor of sociology at Virginia Commonwealth University. I first became aware of her via retweets and recommendations from other people I follow on Twitter, and she is indeed one of the best writers on that site. Thick: And Other Essays is an essay collection focused primarily on how American culture treats black women. I will be honest here, in part because I think much of the regular audience for my book reviews is similar to me (white, well-off from working in tech, and leftist but privileged) and therefore may identify with my experience. This is the sort of book that I always want to read and then struggle to start because I find it intimidating. It received a huge amount of praise on release, including being named as a finalist for the National Book Award, and that praise focused on its incisiveness, its truth-telling, and its depth and complexity. Complex and incisive books about racism are often hard for me to read; they're painful, depressing, and infuriating, and I have to fight my tendency to come away from them feeling more cynical and despairing. (Despite loving his essays, I'm still procrastinating reading Ta-Nehisi Coates's books.) I want to learn and understand but am not good at doing anything with the information, so this reading can feel like homework. If that's also your reaction, read this book. I regret having waited as long as I did. Thick is still, at times, painful, depressing, and infuriating. It's also brilliantly written in a way that makes the knowledge being conveyed easier to absorb. Rather than a relentless onslaught of bearing witness (for which, I should stress, there is an important place), it is a scalpel. Each essay lays open the heart of a subject in a few deft strokes, points out important features that the reader has previously missed, and then steps aside, leaving you alone with your thoughts to come to terms with what you've just learned. I needed this book to be an essay collection, with each thought just long enough to have an impact and not so long that I became numb. It's the type of collection that demands a pause at the end of each essay, a moment of mental readjustment, and perhaps a paging back through the essay again to remember the sharpest points. The essays often start with seeds of the personal, drawing directly on McMillan Cottom's own life to wrap context around their point. In the first essay, "Thick," she uses advice given her younger self against writing too many first-person essays to talk about the writing form, its critics, and how the backlash against it has become part of systematic discrimination because black women are not allowed to write any other sort of authoritative essay. She then draws a distinction between her own writing and personal essays, not because she thinks less of that genre but because that genre does not work for her as a writer. The essays in Thick do this repeatedly. They appear to head in one direction, then deepen and shift with the added context of precise sociological analysis, defying predictability and reaching a more interesting conclusion than the reader had expected. And, despite those shifts, McMillan Cottom never lost me in a turn. This is a book that is not only comfortable with complexity and nuance, but helps the reader become comfortable with that complexity as well. The second essay, "In the Name of Beauty," is perhaps my favorite of the book. 
Its spark was backlash against an essay McMillan Cottom wrote about Miley Cyrus, but the topic of the essay wasn't what sparked the backlash.
What many black women were angry about was how I located myself in what I'd written. I said, blithely as a matter of observable fact, that I am unattractive. Because I am unattractive, the argument went, I have a particular kind of experience of beauty, race, racism, and interacting with what we might call the white gaze. I thought nothing of it at the time I was writing it, which is unusual. I can usually pinpoint what I have said, written, or done that will piss people off and which people will be pissed off. I missed this one entirely.
What follows is one of the best essays on the social construction of beauty I've ever read. It barely pauses at the typical discussion of unrealistic beauty standards as a feminist issue, instead diving directly into beauty as whiteness, distinguishing between beauty standards that change with generations and the more lasting rules that instead police the bounds between white and not white. McMillan Cottom then goes on to explain how beauty is a form of capital, a poor and problematic one but nonetheless one of the few forms of capital women have access to, and therefore why black women have fought to be included in beauty despite all of the problems with judging people by beauty standards. And the essay deepens from there into a trenchant critique of both capitalism and white feminism that is both precise and illuminating.
When I say that I am unattractive or ugly, I am not internalizing the dominant culture's assessment of me. I am naming what has been done to me. And signaling who did it. I am glad that doing so unsettles folks, including the many white women who wrote to me with impassioned cases for how beautiful I am. They offered me neoliberal self-help nonsense that borders on the religious. They need me to believe beauty is both achievable and individual, because the alternative makes them vulnerable.
I could go on. Every essay in this book deserves similar attention. I want to quote from all of them. These essays are about racism, feminism, capitalism, and economics, all at the same time. They're about power, and how it functions in society, and what it does to people. There is an essay about Obama that contains the most concise explanation for his appeal to white voters that I've read. There is a fascinating essay about the difference between ethnic black and black-black in U.S. culture. There is so much more.
We do not share much in the U.S. culture of individualism except our delusions about meritocracy. God help my people, but I can talk to hundreds of black folks who have been systematically separated from their money, citizenship, and personhood and hear at least eighty stories about how no one is to blame but themselves. That is not about black people being black but about people being American. That is what we do. If my work is about anything it is about making plain precisely how prestige, money, and power structure our so-called democratic institutions so that most of us will always fail.
I, like many other people in my profession, was always more comfortable with the technical and scientific classes in college. I liked math and equations and rules, dreaded essay courses, and struggled to engage with the mandatory humanities courses. Something that I'm still learning, two decades later, is the extent to which this was because the humanities are harder work than the sciences and I wasn't yet up to the challenge of learning them properly. The problems are messier and more fluid. The context required is broader. It's harder to be clear and precise. And disciplines like sociology deal with our everyday lived experience, which means that we all think we're entitled to an opinion. Books like this, which can offer me a hand up and a grounding in the intellectual rigor while simultaneously being engaging and easy to read, are a treasure. They help me fill in the gaps in my education and help me recognize and appreciate the depth of thought in disciplines that don't come as naturally to me. This book was homework, but the good kind, the kind that exposes gaps in my understanding, introduces topics I hadn't considered, and makes the time fly until I come up for air, awed and thinking hard. Highly recommended. Rating: 9 out of 10

3 April 2020

Dirk Eddelbuettel: RcppSimdJson 0.0.4: Even Faster Upstream!

A new (upstream) simdjson release was announced by Daniel Lemire earlier this week, and my Twitter mentions have been running red-hot ever since as he was kind enough to tag me. Do look at that blog post, there is some impressive work in there. We wrapped the (still very simple) RcppSimdJson around it last night and shipped it this morning. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. For illustration, I highly recommend the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk). The best-case performance is faster than CPU speed as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle used per byte parsed. This release brings upstream 0.3 (and 0.3.1) plus a minor tweak (also shipped back upstream). Our full NEWS entry follows.

Changes in version 0.0.4 (2020-04-03)
  • Upgraded to new upstream releases 0.3 and 0.3.1 (Dirk in #9 closing #8)
  • Updated example validateJSON to API changes.

But because Daniel is such a fantastic upstream developer to collaborate with, he even filed a full feature request ("maybe you can consider upgrading") as issue #8 at our repo, containing the fully detailed list of changes. As it is so impressive I will simply quote the upper half of just the major changes:

Highlights
  • Multi-Document Parsing: Read a bundle of JSON documents (ndjson) 2-4x faster than doing it individually. API docs / Design Details
  • Simplified API: The API has been completely revamped for ease of use, including a new JSON navigation API and fluent support for error code and exception styles of error handling with a single API. Docs
  • Exact Float Parsing: Now simdjson parses floats flawlessly without any performance loss (https://github.com/simdjson/simdjson/pull/558). Blog Post
  • Even Faster: The fastest parser got faster! With a shiny new UTF-8 validator and meticulously refactored SIMD core, simdjson 0.3 is 15% faster than before, running at 2.5 GB/s (where 0.2 ran at 2.2 GB/s).

For questions, suggestions, or issues please use the issue tracker at the GitHub repo. Courtesy of CRANberries, there is also a diffstat report for this release. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

3 November 2017

Reproducible builds folks: Reproducible Builds: Weekly report #131

Here's what happened in the Reproducible Builds effort between Sunday October 22 and Saturday October 28 2017:
Past Events
Upcoming/current events
Documentation updates
Bernhard Wiedemann started The Unreproducible Package, which "is meant as a practical way to demonstrate the various ways that software can break reproducible builds using just low level primitives without requiring external existing programs that implement these primitives themselves. It is structured so that one subdirectory demonstrates one class of issues in some variants observed in the wild."
Reproducible work in other projects
Hush, a fork of ZCash, opened an issue about reproducible builds. A new tag was added to lintian (the lint checker for Debian packages) to ensure that changelog entry timestamps are strictly increasing. This avoids certain real-world issues with identical timestamps, documented in Debian #843773.
Packages reviewed and fixed, and bugs filed
Patches sent upstream: Debian bug reports:
Reviews of unreproducible packages
14 package reviews have been added, 35 have been updated and 28 have been removed in this week, adding to our knowledge about identified issues. 1 issue type has been updated:
Weekly QA work
During our reproducibility testing, FTBFS bugs have been detected and reported by:
strip-nondeterminism development
Version 0.040-1 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks, as well as new ones from:
reprotest development
Development continued in git:
buildinfo.debian.net development
Development continued in git:
reproducible-website development
Misc.
This week's edition was written by Ximin Luo, Chris Lamb, Bernhard M. Wiedemann and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

9 October 2017

Markus Koschany: My Free Software Activities in September 2017

Welcome to gambaru.de. Here is my monthly report covering what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

Debian LTS

This was my nineteenth month as a paid contributor and I have been paid to work 15.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

Misc

QA upload

Thanks for reading and see you next time.

1 October 2017

Paul Wise: FLOSS Activities September 2017

Changes

Issues

Review

Administration
  • icns: merged patches
  • Debian: help guest user with access, investigate/escalate broken network, restart broken stunnels, investigate static.d.o storage, investigate weird RAID mails, ask hoster to investigate power issue
  • Debian mentors: lintian/security updates & reboot
  • Debian wiki: merged & deployed patch, redirect DDTSS translator, redirect user support requests, whitelist email addresses, update email for accounts with bouncing email
  • Debian derivatives census: merged/deployed patches
  • Debian PTS: debugged cron mails, deployed changes, reran scripts, fixed configuration file
  • Openmoko: debug reboot issue, debug load issues

Communication

Sponsors

The samba bug was sponsored by my employer. All other work was done on a volunteer basis.

12 September 2017

Markus Koschany: My Free Software Activities in August 2017

Welcome to gambaru.de. Here is my monthly report covering what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you.

DebConf 17 in Montreal

I traveled to DebConf 17 in Montreal, Canada. I arrived on 4 August and met a lot of different people whom I had only known by name so far. I think this is definitely one of the best aspects of real-life meetings: putting names to faces and getting to know someone better. I totally enjoyed my stay and I would like to thank all the people who were involved in organizing this event. You rock! I also gave a talk about "The past, present and future of Debian Games", listened to numerous other talks and got a nice sunburn which luckily turned into a more brownish color when I returned home on 12 August. The only negative experience was with my airline, which was supposed to fly me home to Frankfurt. They decided to cancel the flight one hour before check-in for unknown reasons and just gave me a telephone number to sort things out. No support whatsoever. Fortunately (probably not for him) another DebConf attendee suffered the same fate, and together we could find another flight with Royal Air Maroc the same day. And so we made a short trip to Casablanca, Morocco and eventually arrived at our final destination in Frankfurt a few hours later. So which airline should you avoid at all costs (they still haven't responded to my refund claims)? It's WoW-Air from Iceland. (just wow)

Debian Games

Debian Java

Debian LTS

This was my eighteenth month as a paid contributor and I have been paid to work 20.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

Non-maintainer upload

Thanks for reading and see you next time.

10 September 2017

Sylvain Beucler: dot-zed archive file format

TL;DR: I reverse-engineered the .zed encrypted archive format.
Following a clean-room design, I'm providing a description that can be implemented by a third party.
Interested? :) (reference version at: https://www.beuc.net/zed/)

.zed archive file format

Introduction

Archives with the .zed extension are conceptually similar to an encrypted .zip file. In addition to a specific format, .zed files support multiple users: files are encrypted using the archive master key, which itself is encrypted for each user and/or authentication method (password, RSA key through certificate, or PKCS#11 token). Metadata such as filenames is partially encrypted. .zed archives are used stand-alone or attached to e-mails with the help of an MS Outlook plugin. A variant, which is not covered here, can encrypt/decrypt MS Windows folders on the fly, like ecryptfs. In the spirit of academic and independent research, this document provides a description of the file format and encryption algorithms for this encrypted file archive. See the conventions section for conventions and acronyms used in this document.

Structure overview

The .zed file format is composed of several layers. Or as a diagram:
.zed archive (MS-CFB)
|-- stream #1: metadata (MS-OLEPS)
|   |-- obfuscation (static key)
|   |   `-- _ctlfile (TLV)
|   `-- _catalog (TLV)
|-- stream #2: encryption (AES, 512-byte chunks)
|   `-- compression (zlib)
|       `-- file contents
`-- stream #3 ...: encryption (AES, 512-byte chunks)
    `-- compression (zlib)
        `-- file contents
Encryption schemes

Several AES key sizes are supported, such as 128 and 256 bits. The Cipher Block Chaining (CBC) block cipher mode of operation is used to decrypt multiple AES 16-byte blocks, which means an initialisation vector (IV) is stored in the clear along with the ciphertext. All filenames and file contents are encrypted using the same encryption mode, key and IV (e.g. if you remove and re-add a file in the archive, the resulting stream will be identical). No cleartext padding is used during encryption; instead, several end-of-stream handlers are available, so the ciphertext has exactly the size of the cleartext (e.g. the size of the compressed file). The following variants were identified in the 'encryption_mode' field.

STREAM

This is the end-of-stream handler for: This end-of-stream handler is apparently specific to the .zed format, and is applied when the cleartext does not end on a 16-byte boundary; in this case special processing is performed on the last partial 16-byte block. The encryption and decryption phases are identical: let's assume the last partial block of cleartext (for encryption) or ciphertext (for decryption) was appended after all the complete 16-byte blocks of ciphertext: In either case, if the full ciphertext is less than one AES block (< 16 bytes), then the IV is used instead of the second-to-last block.

CTS

CTS, or CipherText Stealing, is the end-of-stream handler for: It matches the CBC-CS3 variant as described in Recommendation for Block Cipher Modes of Operation: Three Variants of Ciphertext Stealing for CBC Mode.
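Since CBC-CS3 is fully specified in the NIST recommendation cited above, its tail handling can be shown concretely. Below is a minimal sketch of the decryption side (Python with pycryptodome; the library choice and function names are mine, not anything used by the format's implementation):

    # Minimal sketch of CBC-CS3 tail decryption, per the NIST "Three
    # Variants of Ciphertext Stealing for CBC Mode" addendum.
    # Assumes pycryptodome; names are illustrative.
    from Crypto.Cipher import AES

    def cbc_cs3_decrypt_tail(key: bytes, prev: bytes,
                             c_swapped: bytes, c_stolen: bytes) -> bytes:
        """Decrypt the last two blocks of a CS3 ciphertext.

        prev      -- ciphertext block preceding the tail (or the IV)
        c_swapped -- the full block that CS3 moved in front (encrypted last)
        c_stolen  -- the truncated block of d bytes (d <= 16)
        """
        ecb = AES.new(key, AES.MODE_ECB)
        d = len(c_stolen)
        z = ecb.decrypt(c_swapped)       # z = (P_n || zero pad) xor C_{n-1}
        c_n1 = c_stolen + z[d:]          # reconstruct the full C_{n-1}
        p_n = bytes(a ^ b for a, b in zip(z[:d], c_stolen))
        p_n1 = bytes(a ^ b for a, b in zip(ecb.decrypt(c_n1), prev))
        return p_n1 + p_n                # last full block + partial block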
Empty cleartext

Since empty filenames or metadata are invalid, and since all files are compressed (resulting in a minimum 8-byte zlib cleartext), no empty cleartext was encrypted in the archive.

metadata stream

It is named 05356861616161716149656b7a6565636e576a33317a7868304e63 (hexadecimal), i.e. the character with code 5 followed by '5haaaaqaIekzeecnWj31zxh0Nc' (ASCII). The format used is OLE Property Set (MS-OLEPS). It introduces 2 property names, "_ctlfile" (index 3) and "_catalog" (index 4), and 2 instances of said properties, each containing an application-specific VT_BLOB (type 0x0041).

_ctlfile: obfuscated global properties and access list

This subpart is stored under index 3 ("_ctlfile") of the MS-OLEPS metadata. It consists of: The ciphertext is encrypted with AES-CBC "STREAM" mode using the 128-bit static key 37F13CF81C780AF26B6A52654F794AEF (hexadecimal) and the prepended IV so as to obfuscate the access list. The ciphertext is continuous and not split in chunks (unlike files), even when it is larger than 512 bytes. The decrypted text contains properties in a TLV format as described in _ctlfile TLV: Archives may include "mandatory" users that cannot be removed. They are typically used to add an enterprise-wide recovery RSA key to all archives. Extreme care must be taken to protect these keys, as they can decrypt all past archives generated from within that company.

_catalog: file list

This subpart is stored under index 4 ("_catalog") of the MS-OLEPS metadata. It contains a series of 'fileprops' TLV structures, one for each file or directory. The file hierarchy can be reconstructed by checking the 'parent_id' field of each file entry. If 'parent_id' is 0 then the file is located at the top level of the hierarchy, otherwise it is located under the directory with the matching 'file_id'.

TLV format

This format is a series of fields: Value semantics depend on the field's Type. It may contain a uint32be integer, a UTF-16LE string, a character sequence, or an inner TLV structure. Unless otherwise noted, TLV structures appear once. Some fields are optional and may not be present at all (e.g. 'archive_createdwith'). Some fields are unique within a structure (e.g. 'files_iv'), others may be repeated within a structure to form a list (e.g. 'fileprops' and 'passworduser'). The following top-level types have been identified, and are detailed in the next sections: Some additional unidentified types may be present.

_ctlfile TLV

_catalog TLV

Decrypting the archive AES key

rsauser

The user accessing the archive will be authenticated by comparing his/her X509 certificate with the one stored in the 'certificate' field, using DER format. The 'files_key_ciphertext' field is then decrypted using the PKCS#1 v1.5 encryption mechanism, with the private key that matches the user certificate.

passworduser

An intermediary user key, a user IV and an integrity checksum will be derived from the user password, using the deprecated PKCS#12 method as described in rfc7292 appendix B. Note: this is not PKCS#5 (nor PBKDF1/PBKDF2); this is an incompatible method from PKCS#12 that notably does not use HMAC. The 'pkcs12_hashfunc' field defines the underlying hash function. The following values have been identified:

PBA - Password-based authentication

The user accessing the archive will be authenticated by deriving an 8-byte sequence from his/her password. The parameters for the derivation function are: The derivation is checked against 'pba_checksum'.

PBE - Password-based encryption

Once the user is identified, 2 new values are derived from the password with different parameters to produce the IV and the key decryption key, with the same hash function: The parameters specific to the user key are: The user key needs to be truncated to a length of 'encryption_strength', as specified in bytes in the archive properties. The parameters specific to the user IV are: Once the key decryption key and the IV are derived, 'files_key_ciphertext' is decrypted using AES CBC, with PKCS#7 padding.
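To make that last step concrete, here is a minimal sketch (again Python with pycryptodome, an assumption of mine); it takes the key decryption key and user IV as already derived and truncated per the PKCS#12 procedure above, which is not reproduced here:

    # Minimal sketch: final step of the 'passworduser' path. `kdk` and
    # `user_iv` are assumed to have been derived from the password via
    # the PKCS#12 (rfc7292 appendix B) method and truncated to
    # 'encryption_strength' bytes; that derivation is not shown.
    from Crypto.Cipher import AES
    from Crypto.Util.Padding import unpad

    def decrypt_files_key(files_key_ciphertext: bytes,
                          kdk: bytes, user_iv: bytes) -> bytes:
        cipher = AES.new(kdk, AES.MODE_CBC, iv=user_iv)
        # The spec states this field uses AES-CBC with PKCS#7 padding.
        return unpad(cipher.decrypt(files_key_ciphertext), AES.block_size)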
Identifying file streams

The name of the MS-CFB stream is derived by shuffling the bytes from the 'file_id' field and then encoding the result as hexadecimal. The reordering is:

Initial  offset: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Shuffled offset: 3 2 1 0 5 4 7 6 8 9 10 11 12 13 14 15
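In code the shuffle is a one-liner; a minimal sketch (Python, standard library only, names mine):

    # Minimal sketch: derive the MS-CFB stream name from a 16-byte
    # 'file_id' using the reordering table above (the mapping is its own
    # inverse: it swaps byte pairs within the first eight bytes).
    SHUFFLE = (3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15)

    def stream_name(file_id: bytes) -> str:
        shuffled = bytes(file_id[i] for i in SHUFFLE)
        # The 16th byte is usually NUL and is dropped, yielding the
        # 30-character hexadecimal identifier described below.
        if shuffled.endswith(b"\x00"):
            shuffled = shuffled[:-1]
        return shuffled.hex()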
The 16th byte is usually a NUL byte, hence the stream identifier is a 30-character-long string.

Decrypting files

The compressed stream is split in chunks of 512 bytes, each of them encrypted separately using AES-CBC and the global archive encryption scheme. Decryption uses the global AES key (retrieved using the user credentials) and the global IV (retrieved from the deobfuscated archive metadata). The IV for each chunk is computed by: Each chunk is an independent stream and the decryption process involves end-of-stream handling even if this is not the end of the actual file. This is particularly important for the CTS handler. Note: this is not to be confused with the CTR block cipher mode of operation, which operates differently and requires a nonce.

Decompressing files

Compressed streams are zlib streams with default compression options and can be decompressed following the zlib format.

Test cases

Excluded for brevity, cf. https://www.beuc.net/zed/#test-cases.

Conventions and references

Feedback

Feel free to send comments to beuc@beuc.net. If you have .zed files that you think are not covered by this document, please send them as well (replace sensitive files with other ones). The author's GPG key can be found at 8FF1CB6E8D89059F.

Copyright (C) 2017 Sylvain Beucler

Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.


31 August 2017

Chris Lamb: Free software activities in August 2017

Here is my monthly update covering what I have been doing in the free software world in August 2017 (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process, by promising that identical results are always generated from a given source, thus allowing multiple third parties to come to a consensus on whether a build was compromised. I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area. This month I:
  • Presented a status update at DebConf17 in Montréal, Canada alongside Holger Levsen, Maria Glukhova, Steven Chamberlain, Vagrant Cascadian, Valerie Young and Ximin Luo.
  • I worked on the following issues upstream:
    • glib2.0: Please make the output of gio-querymodules reproducible. (...)
    • gcab: Please make the output reproducible. (...)
    • gtk+2.0: Please make the immodules.cache files reproducible. (...)
    • desktop-file-utils: Please make the output reproducible. (...)
  • Within Debian:
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Worked on publishing our weekly reports. (#118, #119, #120, #121 & #122)

I also made the following changes to our tooling:
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Use the name attribute over path to avoid leaking the full comparison path in output. (commit)
  • Add missing skip_unless_module_exists import. (commit)
  • Tidy diffoscope.progress and the XML comparator (commit, commit)

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.

  • Add a simple autopkgtest smoke test. (commit)


Debian
Patches contributed
  • openssh: Quote the IP address in ssh-keygen -f suggestions. (#872643)
  • libgfshare:
    • SIGSEGV if /dev/urandom is not accessible. (#873047)
    • Add bindnow hardening. (#872740)
    • Support nodoc build profile. (#872739)
  • devscripts:
  • memcached: Add hardening to systemd .service file. (#871610)
  • googler: Tidy long and short package descriptions. (#872461)
  • gnome-split: Homepage points to domain-parked website. (#873037)

Uploads
  • python-django 1:1.11.4-1 New upstream release.
  • redis:
    • 4:4.0.1-3 Drop yet more non-deterministic tests.
    • 4:4.0.1-4 Tighten systemd/seccomp hardening.
    • 4:4.0.1-5 Drop even more tests with timing issues.
    • 4:4.0.1-6 Don't install completions to /usr/share/bash-completion/completions/debian/bash_completion/.
    • 4:4.0.1-7 Don't let sentinel integration tests fail the build as they use too many timers to be meaningful. (#872075)
  • python-gflags 1.5.1-3 If SOURCE_DATE_EPOCH is set, either use that as a source of current dates or the UTC-version of the file's modification time (#836004), don't call update-alternatives --remove in postrm. Update debian/watch/Homepage & refresh/tidy the packaging.
  • bfs 1.1.1-1 New upstream release, tidy autopkgtest & patches, organising the latter with Pq-Topic.
  • python-daiquiri 1.2.2-1 New upstream release, tidy autopkgtests & update travis.yml from travis.debian.net.
  • aptfs 2:0.10-2 Add upstream signing key, refer to /usr/share/common-licenses/GPL-3 in debian/copyright & tidy autopkgtests.
  • adminer 4.3.1-2 Add a simple autopkgtest & don't install the Selenium-based tests in the binary package.
  • zoneminder (1.30.4+dfsg-2) Prevent build failures with GCC 7 (#853717) & correct example /etc/fstab entries in README.Debian (#858673).

Finally, I reviewed and sponsored uploads of astral, inflection, more-itertools, trollius-redis & wolfssl.

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 1049-1 for libsndfile preventing a remote denial of service attack.
  • Issued DLA 1052-1 against subversion to correct an arbitrary code execution vulnerability.
  • Issued DLA 1054-1 for the libgxps XML Paper Specification library to prevent a remote denial of service attack.
  • Issued DLA 1056-1 for cvs to prevent a command injection vulnerability.
  • Issued DLA 1059-1 for the strongswan VPN software to close a denial of service attack.

Debian bugs filed
  • wget: Please hash the hostname in ~/.wget-hsts files. (#870813)
  • debian-policy: Clarify whether mailing lists in Maintainers/Uploaders may be moderated. (#871534)
  • git-buildpackage: "pq export" discards text within square brackets. (#872354)
  • qa.debian.org: Escape HTML in debcheck before outputting. (#872646)
  • pristine-tar: Enable multithreaded compression in pristine-xz. (#873229)
  • tryton-meta: Please combine tryton-modules-* into a single source package with multiple binaries. (#873042)
  • azure-cli:
  • fwupd-tests: Don't ship test files to generic /usr/share/installed-tests dir. (#872458)
  • libvorbis: Maintainer fields points to a moderated mailing list. (#871258)
  • rmlint-gui: Ship a rmlint-gui binary. (#872162)
  • template-glib: debian/copyright references online source without quotation. (#873619)

FTP Team

As a Debian FTP assistant I ACCEPTed 147 packages: abiword, adacgi, adasockets, ahven, animal-sniffer, astral, astroidmail, at-at-clojure, audacious, backdoor-factory, bdfproxy, binutils, blag-fortune, bluez-qt, cheshire-clojure, core-match-clojure, core-memoize-clojure, cypari2, data-priority-map-clojure, debian-edu, debian-multimedia, deepin-gettext-tools, dehydrated-hook-ddns-tsig, diceware, dtksettings, emacs-ivy, farbfeld, gcc-7-cross-ports, git-lfs, glewlwyd, gnome-recipes, gnome-shell-extension-tilix-dropdown, gnupg2, golang-github-aliyun-aliyun-oss-go-sdk, golang-github-approvals-go-approval-tests, golang-github-cheekybits-is, golang-github-chzyer-readline, golang-github-denverdino-aliyungo, golang-github-glendc-gopher-json, golang-github-gophercloud-gophercloud, golang-github-hashicorp-go-rootcerts, golang-github-matryer-try, golang-github-opentracing-contrib-go-stdlib, golang-github-opentracing-opentracing-go, golang-github-tdewolff-buffer, golang-github-tdewolff-minify, golang-github-tdewolff-parse, golang-github-tdewolff-strconv, golang-github-tdewolff-test, golang-gopkg-go-playground-validator.v8, gprbuild, gsl, gtts, hunspell-dz, hyperlink, importmagic, inflection, insighttoolkit4, isa-support, jaraco.itertools, java-classpath-clojure, java-jmx-clojure, jellyfish1, lazymap-clojure, libblockdev, libbytesize, libconfig-zomg-perl, libdazzle, libglvnd, libjs-emojify, libjwt, libmysofa, libundead, linux, lua-mode, math-combinatorics-clojure, math-numeric-tower-clojure, mediagoblin, medley-clojure, more-itertools, mozjs52, openssh-ssh1, org-mode, oysttyer, pcscada, pgsphere, poppler, puppetdb, py3status, pycryptodome, pysha3, python-cliapp, python-coloredlogs, python-consul, python-deprecation, python-django-celery-results, python-dropbox, python-fswrap, python-hbmqtt, python-intbitset, python-meshio, python-parameterized, python-pgpy, python-py-zipkin, python-pymeasure, python-thriftpy, python-tinyrpc, python-udatetime, python-wither, python-xapp, pythonqt, r-cran-bit, r-cran-bit64, r-cran-blob, r-cran-lmertest, r-cran-quantmod, r-cran-ttr, racket-mode, restorecond, rss-bridge, ruby-declarative, ruby-declarative-option, ruby-errbase, ruby-google-api-client, ruby-rash-alt, ruby-representable, ruby-test-xml, ruby-uber, sambamba, semodule-utils, shimdandy, sjacket-clojure, soapysdr, stencil-clojure, swath, template-glib, tools-analyzer-jvm-clojure, tools-namespace-clojure, uim, util-linux, vim-airline, vim-airline-themes, volume-key, wget2, xchat, xfce4-eyes-plugin & xorg-gtest. I additionally filed 6 RC bugs against packages that had incomplete debian/copyright files: gnome-recipes, golang-1.9, libdazzle, poppler, python-py-zipkin & template-glib.

30 June 2017

Arturo Borrero González: About the OutlawCountry Linux malware

Today I noticed the internet buzz about a new alleged Linux malware called OutlawCountry by the CIA, and leaked by Wikileaks. The malware redirects traffic from the victim to a control server in order to spy or whatever. To redirect this traffic, they use simple Netfilter NAT rules injected in the kernel. According to many sites commenting on the issue, it seems that there is something wrong with the Linux kernel Netfilter subsystem, but I read the leaked docs, and what they do is load a custom kernel module in order to be able to load Netfilter NAT tables/rules with higher priority than the default ones (overriding any config the system may have). Isn't that clear? The attacker is loading a custom kernel module as root on your machine. They don't use Netfilter to break into your system. The problem is not Netfilter, the problem is your whole machine being under their control. With root control of the machine, they could simply use any mechanism, like kpatch or whatever, to replace your whole running kernel with a new one, with full access to memory, networking, file system et al. They probably use a rootkit or the like to take over the system.

5 June 2017

Clint Adams: nibus daisens

What's up with narcissists and sexual predators frequently reapplying their lipstick?
Posted on 2017-06-05
Tags: ranticore

3 June 2017

Mike Hommey: Announcing git-cinnabar 0.5.0 beta 1

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git. Get it on github. These release notes are also available on the git-cinnabar wiki. What's new since 0.4.0?
